Question Answering with Recurrent Span Representations

Author

  • Kevin Wu
Abstract

Using the Stanford Question Answering Dataset (SQuAD) released by Rajpurkar et al. [1], we leverage a modified version of the Recurrent Span Representation (RaSoR) model of Lee et al. [2] with linear space complexity. This contrasts with the original model of Lee et al., which requires quadratic space to enumerate all candidate answer spans. In addition, we introduce a novel architecture, Sequence Question-Induced Distances (SQuID), as an extension of our modified RaSoR implementation, which allows us to impose a prior on the answer span based on a question representation. With our modified RaSoR implementation we still obtain good accuracy, reaching F1 and Exact Match (EM) scores of around 60% and 45% respectively on both the CodaLab dev and test sets, with lower memory and shorter training time requirements. However, the results from our SQuID extension are inconclusive; we believe that while imposing a question-induced prior on the answer span may allow us to encode answer distances better, a generic question representation may not in itself be a reliable indicator of the length of an answer span.
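To make the memory trade-off in the abstract concrete, the sketch below contrasts explicit span enumeration, which materializes a score for all O(n²) candidate (start, end) spans, with a factorized scheme that stores only per-token start and end scores in O(n) space. This is a minimal illustration under assumed simplifications, not the authors' implementation: the function names, the toy span score, and the greedy decode are all illustrative choices.

```python
import numpy as np

def enumerate_span_scores(token_scores):
    """Quadratic-space baseline: hold a score for every candidate
    (start, end) span in memory at once. RaSoR would instead build a
    learned representation per span; a scalar sum stands in here."""
    n = len(token_scores)
    scores = np.full((n, n), -np.inf)
    for i in range(n):
        for j in range(i, n):
            scores[i, j] = token_scores[i:j + 1].sum()
    return scores  # O(n^2) entries held in memory

def factorized_span_decode(start_scores, end_scores):
    """Linear-space alternative: keep only per-token start and end
    scores and combine them lazily at decode time (an assumption about
    how a linear-space variant could work; the paper's exact
    factorization may differ)."""
    i = int(np.argmax(start_scores))          # best start position
    j = i + int(np.argmax(end_scores[i:]))    # best end at or after i
    return i, j  # only O(n) numbers were ever stored

# Toy usage with random per-token scores for a 10-token passage.
rng = np.random.default_rng(0)
tokens = rng.normal(size=10)
print(enumerate_span_scores(tokens).shape)       # (10, 10)
print(factorized_span_decode(tokens, tokens))    # a (start, end) pair
```

The factorized decode trades the ability to score each span jointly for linear memory, which matches the abstract's claim of comparable accuracy at a lower memory and time cost.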


Similar Articles

Learning Recurrent Span Representations for Extractive Question Answering

The reading comprehension task, which asks questions about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates pre-defined manually or through the use of an external NLP pipeline. However, Rajpurkar et al. (2016) recently released the SQUAD dataset in which the answ...


Global Span Representation Model for Machine Comprehension on SQuAD

Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions, relevant context and answers created by humans through crowdsourcing. Given this more realistic dataset, we focus on the question answering task of machine comprehension. For this problem, we ...


Learning Convolutional Text Representations for Visual Question Answering

Visual question answering is a recently proposed artificial intelligence task that requires a deep understanding of both images and texts. In deep learning, images are typically modeled through convolutional neural networks, and texts are typically modeled through recurrent neural networks. While the requirement for modeling images is similar to traditional computer vision tasks, such as object ...


Knowledge-Based Question Answering as Machine Translation

A typical knowledge-based question answering (KB-QA) system faces two challenges: one is to transform natural language questions into their meaning representations (MRs); the other is to retrieve answers from knowledge bases (KBs) using generated MRs. Unlike previous methods which treat them in a cascaded manner, we present a translation-based approach to solve these two tasks in one unified fr...


Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering

The problem of Visual Question Answering (VQA) requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on recurrent LSTM networks to this problem, but have failed to model spatial inference. In this paper, we propose a memory network with spatial attention for the VQA task. Memory networks ...





Publication date: 2017